Asymptotic theory (statistics)
In statistics, asymptotic theory, or large sample theory, is a generic framework for assessing the properties of estimators and statistical tests. Within this framework it is typically assumed that the sample size ''n'' grows indefinitely, and the properties of statistical procedures are evaluated in the limit as ''n'' → ∞.
In practical applications, asymptotic theory is applied by treating the asymptotic results as approximately valid for finite sample sizes as well. This approach is often criticized for lacking a rigorous mathematical justification, yet it is used ubiquitously. The importance of asymptotic theory is that it often makes it possible to carry out the analysis and state results that cannot be obtained within the standard “finite-sample theory”.
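To make this concrete, the following is a minimal sketch of treating an asymptotic result as a finite-sample approximation: a CLT-based normal confidence interval for a sample mean is computed for ''n'' = 200 observations. The exponential data-generating distribution, the sample size, and the use of numpy/scipy are illustrative assumptions, not part of the article.
<syntaxhighlight lang="python">
# Minimal sketch: an asymptotic (CLT-based) normal approximation used to
# build a confidence interval for the mean of a finite sample.
# The exponential distribution and n = 200 are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200                                  # finite sample size
x = rng.exponential(scale=2.0, size=n)   # true mean is 2.0

mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)          # estimated standard error of the mean

# Asymptotically, sqrt(n) * (mean - mu) / sigma is standard normal, so the
# normal quantile is treated as approximately valid at n = 200.
z = stats.norm.ppf(0.975)
print(f"95% asymptotic CI for the mean: ({mean - z * se:.3f}, {mean + z * se:.3f})")
</syntaxhighlight>
The normal quantile is exact only in the limit ''n'' → ∞; treating it as valid at ''n'' = 200 is exactly the finite-sample use of asymptotic results described above.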
==Overview==
Most statistical problems begin with a dataset of size ''n''. The asymptotic theory proceeds by assuming that it is possible to keep collecting additional data, so that the sample size grows indefinitely:
: ''n'' → ∞

Under this assumption many results can be obtained that are unavailable for samples of finite size. As an example consider the law of large numbers. This law states that for a sequence of iid random variables ''X''1, ''X''2, …, the sample averages \overline{X}_n converge in probability to the population mean E[''X''] as ''n'' → ∞. At the same time, for finite ''n'' it is impossible to claim anything about the distribution of \overline{X}_n if the distributions of the individual ''X''''i'' are unknown.
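A short simulation sketch of this convergence, assuming iid uniform(0, 1) draws with population mean 0.5; the distribution and the sample sizes are illustrative choices.
<syntaxhighlight lang="python">
# Minimal sketch of the law of large numbers: the sample average of iid
# uniform(0, 1) draws approaches the population mean E[X] = 0.5 as n grows.
import numpy as np

rng = np.random.default_rng(1)
for n in (10, 100, 10_000, 1_000_000):
    x = rng.uniform(0.0, 1.0, size=n)
    print(f"n = {n:>9}: sample mean = {x.mean():.5f}")
</syntaxhighlight>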
For various models slightly different modes of asymptotics may be used:
* For cross-sectional data (iid) the new observations are sampled independently, from the same fixed distribution. This is the standard case of asymptotics.
* For longitudinal data (time series) the sampling method may differ from model to model. Sometimes the data are assumed to be ergodic; in other applications they may be integrated or cointegrated. In this case the asymptotics are again taken as the number of observations (usually denoted ''T'' in this case) goes to infinity: ''T'' → ∞ (a small simulation of this mode is sketched after this list).
* For panel data, it is commonly assumed that one dimension in the data (''T'') remains fixed, whereas the other dimension grows: ''T'' = const, ''n'' → ∞.
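The sketch below illustrates the time-series mode of asymptotics referred to above: for a stationary, ergodic AR(1) process the time average over ''T'' observations approaches the unconditional mean as ''T'' grows. The AR(1) specification and its parameters are illustrative assumptions, not taken from the article.
<syntaxhighlight lang="python">
# Minimal sketch of time-series asymptotics: for a stationary, ergodic AR(1)
# process the time average over T observations converges to the unconditional
# mean as T grows. The AR(1) model and its parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
phi, mu = 0.8, 1.0                     # AR(1) coefficient and long-run mean

def simulate_ar1(T):
    """Simulate y_t = mu + phi * (y_{t-1} - mu) + eps_t with N(0, 1) shocks."""
    y = np.empty(T)
    y[0] = mu
    eps = rng.standard_normal(T)
    for t in range(1, T):
        y[t] = mu + phi * (y[t - 1] - mu) + eps[t]
    return y

for T in (100, 10_000, 1_000_000):
    print(f"T = {T:>9}: time average = {simulate_ar1(T).mean():.4f}")
</syntaxhighlight>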
Besides these standard approaches, various other “alternative” asymptotic approaches exist:
* Within the local asymptotic normality framework, it is assumed that the value of the “true parameter” in the model varies slightly with ''n'', such that the ''n''-th model corresponds to \theta_n = \theta + h/\sqrt{n}. This approach lets us study the regularity of estimators.
* When statistical tests are studied for their power to distinguish alternatives that are close to the null hypothesis, this is done within the so-called ''“local alternatives”'' framework: the null hypothesis is ''H''0: ''θ'' = ''θ''0, and the alternative is ''H''1: \theta = \theta_0 + h/\sqrt{n}. This approach is especially popular for unit root tests.
* There are models where the dimension of the parameter space Θ''n'' slowly expands with ''n'', reflecting the fact that the more observations a statistician has, the more parameters he or she is tempted to introduce into the model. An example of this is weak-instruments asymptotics.
* In kernel density estimation and kernel regression, an additional parameter, the bandwidth ''h'', is introduced. In these models it is typically assumed that ''h'' → 0 as ''n'' → ∞; however, the rate of convergence must be chosen carefully, usually ''h'' ∝ ''n''^{−1/5} (a minimal sketch of this bandwidth scaling follows this list).
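As a sketch of the bandwidth asymptotics in the last item, the following code evaluates a Gaussian-kernel density estimate at a single point with a bandwidth shrinking at the ''n''^{−1/5} rate. The standard-normal data and the Silverman rule-of-thumb constant 1.06 are illustrative assumptions.
<syntaxhighlight lang="python">
# Minimal sketch of bandwidth asymptotics in kernel density estimation:
# a Gaussian-kernel estimate at x0 = 0 with bandwidth h ~ n^{-1/5}.
# The standard-normal data and the 1.06 constant are illustrative.
import numpy as np

def gaussian_kde_at(x0, data, h):
    """Kernel density estimate at x0 with a Gaussian kernel and bandwidth h."""
    u = (x0 - data) / h
    return np.exp(-0.5 * u**2).sum() / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(3)
true_density_at_0 = 1 / np.sqrt(2 * np.pi)       # standard normal density at 0

for n in (100, 10_000, 1_000_000):
    data = rng.standard_normal(n)
    h = 1.06 * data.std(ddof=1) * n ** (-1 / 5)  # h -> 0 as n -> infinity
    est = gaussian_kde_at(0.0, data, h)
    print(f"n = {n:>9}, h = {h:.4f}, estimate at 0 = {est:.4f} (true {true_density_at_0:.4f})")
</syntaxhighlight>
As ''n'' grows and ''h'' shrinks at this rate, the estimate at the evaluation point approaches the true density value.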

Source: Wikipedia, the free encyclopedia. Read the full article “Asymptotic theory (statistics)” on Wikipedia.


